Building artificial neural circuits for domain-general cognition: a primer on brain-inspired systems-level architecture
There is a concerted effort to build domain-general artificial intelligence
in the form of universal neural network models with sufficient computational
flexibility to solve a wide variety of cognitive tasks but without requiring
fine-tuning on individual problem spaces and domains. To do this, models need
appropriate priors and inductive biases, such that trained models can
generalise to out-of-distribution examples and new problem sets. Here we
provide an overview of the hallmarks endowing biological neural networks with
the functionality needed for flexible cognition, in order to establish which
features might also be important to achieve similar functionality in artificial
systems. We specifically discuss the role of system-level distribution of
network communication and recurrence, in addition to the role of short-term
topological changes for efficient local computation. As machine learning models
become more complex, these principles may provide valuable directions in an
otherwise vast space of possible architectures. In addition, testing these
inductive biases within artificial systems may help us to understand the
biological principles underlying domain-general cognition.Comment: This manuscript is part of the AAAI 2023 Spring Symposium on the
Evaluation and Design of Generalist Systems (EDGeS
Erratum to "Neurocognitive reorganization between crystallized intelligence, fluid intelligence and white matter microstructure in two age-heterogeneous developmental cohorts" [Dev. Cogn. Neurosci. 41 (2020) 100743].
Getting aligned on representational alignment
Biological and artificial information processing systems form representations
that they can use to categorize, reason, plan, navigate, and make decisions.
How can we measure the extent to which the representations formed by these
diverse systems agree? Do similarities in representations then translate into
similar behavior? How can a system's representations be modified to better
match those of another system? These questions pertaining to the study of
representational alignment are at the heart of some of the most active research
areas in cognitive science, neuroscience, and machine learning. For example,
cognitive scientists measure the representational alignment of multiple
individuals to identify shared cognitive priors, neuroscientists align fMRI
responses from multiple individuals into a shared representational space for
group-level analyses, and ML researchers distill knowledge from teacher models
into student models by increasing their alignment. Unfortunately, there is
limited knowledge transfer between research communities interested in
representational alignment, so progress in one field often ends up being
rediscovered independently in another. To improve communication between these fields, we
propose a unifying framework that can serve as a common language between
researchers studying representational alignment. We survey the literature from
all three fields and demonstrate how prior work fits into this framework.
Finally, we lay out open problems in representational alignment where progress
can benefit all three of these fields. We hope that our work can catalyze
cross-disciplinary collaboration and accelerate progress for all communities
studying and developing information processing systems. We note that this is a working paper and encourage readers to reach out with suggestions for future revisions.
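The question of how to measure agreement between representations has several standard answers across these fields; one widely used measure in machine learning and neuroscience is linear centered kernel alignment (CKA). A minimal NumPy sketch (the random matrices are purely illustrative, and CKA is one of many alignment measures, not the unifying framework proposed here):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representations.

    X, Y: (n_samples, n_features) activation matrices for the same stimuli.
    Returns a similarity in [0, 1]; 1 means identical geometry up to
    rotation and isotropic scaling.
    """
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2  # HSIC with linear kernels
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 32))
B = A @ rng.normal(size=(32, 16))  # a linear transform of A
C = rng.normal(size=(100, 16))     # an unrelated representation

print("CKA(A, B):", linear_cka(A, B))  # linearly related: high alignment
print("CKA(A, C):", linear_cka(A, C))  # independent: low alignment
```

Representational similarity analysis, Procrustes distance, and regression-based mappings instantiate the same comparison with different invariance assumptions.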
The globalizability of temporal discounting
Economic inequality is associated with preferences for smaller, immediate gains over larger, delayed ones. Such temporal discounting may feed into rising global inequality, yet it is unclear whether it is a function of choice preferences or norms, or rather reflects the absence of sufficient resources for immediate needs. It is also unclear whether these findings reflect true differences in choice patterns between income groups. We tested temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries (N = 13,629). Across a diverse sample, we found consistent, robust rates of choice anomalies. Lower-income groups did not differ significantly, but economic inequality and broader financial circumstances were clearly correlated with population choice patterns.
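Temporal discounting is often modeled with a hyperbolic function, where the present value of a delayed reward A at delay D is V = A / (1 + kD) for an individual discount rate k. A minimal sketch (the amounts, delay, and k values below are illustrative, not estimates from this study):

```python
def discounted_value(amount, delay, k):
    """Hyperbolic present value of a delayed reward."""
    return amount / (1.0 + k * delay)

def chooses_immediate(immediate, delayed, delay, k):
    """True if the immediate option beats the discounted delayed one."""
    return immediate > discounted_value(delayed, delay, k)

# A steep discounter (k = 0.05/day) takes $50 now over $100 in 30 days,
# since 100 / (1 + 0.05 * 30) = 40 < 50; a shallow discounter
# (k = 0.005/day) waits, since 100 / 1.15 ~ 87 > 50.
print(chooses_immediate(50, 100, 30, 0.05))   # True
print(chooses_immediate(50, 100, 30, 0.005))  # False
```

Choice anomalies such as present bias show up as departures from this single-parameter model.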
Neurocognitive reorganization between crystallized intelligence, fluid intelligence and white matter microstructure in two age-heterogeneous developmental cohorts.
Despite the reliability of intelligence measures in predicting important life outcomes such as educational achievement and mortality, the exact configuration and neural correlates of cognitive abilities remain poorly understood, especially in childhood and adolescence. We therefore sought to elucidate the factorial structure and neural substrates of child and adolescent intelligence using two cross-sectional developmental samples (CALM: N = 551, of whom 165 had imaging, ages 5-18; NKI-Rockland: N = 337, of whom 65 had imaging, ages 6-18). In a preregistered analysis, we used structural equation modelling (SEM) to examine the neurocognitive architecture of individual differences in childhood and adolescent cognitive ability. In both samples, we found that cognitive ability in lower- and typical-ability cohorts is best understood as two separable constructs, crystallized and fluid intelligence, which became more distinct across development, in line with the age differentiation hypothesis. Further analyses revealed that white matter microstructure, most prominently in the superior longitudinal fasciculus, was strongly associated with crystallized (gc) and fluid (gf) abilities. Finally, we used SEM trees to demonstrate developmental reorganization of gc and gf and their white matter substrates, such that the relationships among these factors weakened between ages 7 and 8 before strengthening around age 10. Together, our results suggest that the period shortly before puberty marks a pivotal phase of change in the neurocognitive architecture of intelligence.
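The age differentiation hypothesis predicts that the gc-gf correlation weakens with age. A toy simulation can illustrate the pattern being tested (all sample sizes and correlations here are invented for illustration and are not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cohort(n, r):
    """Draw n (gc, gf) standardized score pairs with target correlation r."""
    cov = [[1.0, r], [r, 1.0]]
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Hypothetical cohorts: younger children with tightly coupled abilities,
# adolescents with more differentiated abilities.
young = simulate_cohort(500, 0.8)
old = simulate_cohort(500, 0.4)

for label, scores in [("younger cohort", young), ("older cohort", old)]:
    r_hat = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]
    print(f"{label}: observed gc-gf correlation = {r_hat:.2f}")
```

The study's SEM approach estimates this latent correlation (and its change across age) jointly with the measurement model, rather than from raw score pairs as here.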
Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings
Brain networks exist within the confines of resource limitations. As a result, a brain network must overcome the metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing. Here, to observe the effect of these processes, we introduce the spatially embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a three-dimensional Euclidean space, where the communication of constituent neurons is constrained by a sparse connectome. We find that seRNNs converge on structural and functional features that are also commonly found in primate cerebral cortices. Specifically, they converge on solving inferences using modular small-world networks, in which functionally similar units spatially configure themselves to utilize an energetically efficient mixed-selective code. Because these features emerge in unison, seRNNs reveal how many common structural and functional brain motifs are strongly intertwined and can be attributed to basic biological optimization processes. seRNNs incorporate biophysical constraints within a fully artificial system and can serve as a bridge between structural and functional research communities to move neuroscientific understanding forwards.
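The spatial-embedding idea can be sketched as a distance-weighted wiring cost: each hidden unit is assigned a fixed 3-D coordinate, and recurrent weights are penalized in proportion to the Euclidean length of the connection they span, making long-range wiring expensive. A simplified illustration (not the authors' exact loss or training setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                    # number of hidden units
coords = rng.uniform(size=(n, 3))         # fixed 3-D position of each unit
W = rng.normal(scale=0.1, size=(n, n))    # recurrent weight matrix

# Pairwise Euclidean distances between unit positions.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def spatial_l1(W, dist, strength=1e-2):
    """Distance-weighted L1 penalty on recurrent weights: long-range
    connections cost more, so training pushes the learned connectome
    toward sparse, local wiring."""
    return strength * np.sum(dist * np.abs(W))

print("spatial wiring cost:", spatial_l1(W, dist))
```

In training, a term like this would be added to the task loss, so the optimizer trades task performance against wiring length, which is what lets modularity and locality emerge rather than being imposed.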
Fluid intelligence and naturalistic task impairments after focal brain lesions.
Classical executive tasks, such as Wisconsin card-sorting and verbal fluency, are widely used as tests of frontal lobe control functions. Since the pioneering work of Shallice and Burgess (1991), it has been known that complex, naturalistic tasks can capture deficits that are missed in these classical tests. Matching this finding, deficits in several classical tasks are predicted by loss of fluid intelligence, linked to damage in a specific cortical "multiple-demand" (MD) network, while deficits in a more naturalistic task are not. To expand on these previous results, we examined the effect of focal brain lesions on three new tests: a modification of the previously used Hotel task, a new test of task switching after extended delays, and a test of decision-making in imagined real-life scenarios. As potential predictors of impairment, we measured the volume of damage to a priori MD and default mode (DMN) networks, as well as cortical damage outside these networks. Deficits in the three new tasks were substantial, but were not explained by loss of fluid intelligence or by volume of damage to either the MD or DMN network. Instead, deficits were associated with diverse lesions and were not strongly correlated with one another. These results confirm that naturalistic tasks capture cognitive deficits beyond those measured by fluid intelligence. We suggest, however, that these deficits may not arise from specific control operations required by complex behaviour. Instead, like everyday activities, complex tasks combine a rich variety of interacting cognitive components, bringing many opportunities for processing to be disturbed.